In a discussion about the need for AI regulation and transparent development practices by tech companies, former President Barack Obama highlighted AI's potential risks and rewards and urged tech experts to take on government roles to help shape thoughtful AI policy. The conversation also touched on First Amendment challenges and the need for a multi-faceted, adaptive regulatory approach to AI.
On Wednesday, March 13, 2024, the European Parliament approved the world's first comprehensive AI regulatory framework, setting global standards for artificial intelligence, categorizing systems by risk, and aiming to foster innovation alongside the protection of fundamental rights, with staggered implementation starting in 2025.
Microsoft and Apple will no longer hold non-voting observer seats on OpenAI's board. Microsoft withdrew from the role, while Apple opted not to take the position. OpenAI plans to update business partners and investors through regular meetings instead of board representation. The changes likely respond to increased scrutiny from EU and US regulators of Big Tech's investments in AI startups, amid concerns that such deals stifle competition.
California state Senator Scott Wiener's Safe and Secure Innovation for Frontier Artificial Intelligence Models bill requires companies training "frontier models" that cost more than $100 million to conduct safety testing and to be able to shut down their models in the event of a safety incident. The bill has drawn heavy criticism from the tech industry. It would apply to everyone doing business in California, not just companies that develop their models there. The article includes an interview with Wiener about the bill and its critics.
The White House says there is no need to regulate open-source AI systems in their current form and that the community should be allowed to continue developing the technology.
California Governor Gavin Newsom has vetoed SB 1047, a significant bill aimed at regulating artificial intelligence development. Authored by State Senator Scott Wiener, the bill sought to impose liability on companies developing AI models, requiring them to implement safety protocols to mitigate potential "critical harms." The regulations would have specifically targeted models costing at least $100 million to develop and those trained with a substantial amount of computational power.

The bill faced considerable opposition from stakeholders across Silicon Valley, including prominent companies like OpenAI and influential technologists such as Yann LeCun, Meta's chief AI scientist. Even some Democratic politicians, including U.S. Congressman Ro Khanna, expressed concerns about the bill. Despite amendments made to address some of those concerns, opponents remained hopeful that Newsom would ultimately veto the legislation, especially given his earlier indications of reservations about it.

In his veto statement, Newsom said that while the intentions behind SB 1047 were commendable, the bill failed to consider the context in which AI systems are deployed, particularly in high-risk environments or when sensitive data is involved. He criticized the bill for applying stringent standards to even the basic functions of AI systems, arguing that this approach would not effectively protect the public from genuine threats posed by the technology.

Nancy Pelosi, a long-time Congresswoman and former House Speaker, also criticized the bill, labeling it "well-intentioned but ill-informed." Following the veto, she commended Newsom for recognizing the need to empower smaller entrepreneurs and academic institutions rather than allowing large tech companies to dominate the field.

Alongside the veto, Newsom's office highlighted his recent efforts to regulate AI, noting that he had signed 17 AI-related bills in the past month. He has also sought guidance from experts in the field to help California establish effective frameworks for the responsible deployment of generative AI.

In response, Senator Wiener expressed disappointment, describing the veto as a setback for those advocating for oversight of large corporations that make critical decisions affecting public safety and welfare. He emphasized that the debate surrounding the bill has significantly advanced the global conversation about AI safety.